Section: New Results

Crossing the Chasm

Participants : Alejandro Arbelaez, Anne Auger, Robert Busa-Fekete, Nikolaus Hansen, Balázs Kégl, Manuel Loth, Nadjib Lazaar, Marc Schoenauer, Michèle Sebag.

Following the departure of the two PhD students funded within the Microsoft-INRIA joint lab (Alvaro Fialho and Alejandro Arbelaez) after their successful defenses, some activities of this SIG were slightly redefined this year, with the one-month visit of Prof. Th. Runarsson (University of Iceland) in October and the arrival in November of two new post-docs, also funded by the joint lab (Nadjib Lazaar and Manuel Loth). A new research direction has emerged, in line with both Adaptive Operator Selection (Alvaro Fialho's PhD) and Continuous Search (Alejandro Arbelaez's PhD).

Bandit-based choice of heuristics in combinatorial optimization

This new research direction addresses heuristic selection within an existing combinatorial solver using bandit-like algorithms; the very first results concern scheduling problems and will be published in early 2012 [57].
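To make the idea concrete, below is a minimal sketch of a bandit policy driving heuristic choice inside a solver loop, assuming UCB1 as the policy; the solver interface, the heuristic list, and the reward definition (e.g., reduction in search effort) are illustrative placeholders, not the actual setting of [57].

    import math

    def ucb1_select(counts, sums, t, c=math.sqrt(2)):
        """Return the index of the heuristic maximizing the UCB1 score."""
        for k, n in enumerate(counts):
            if n == 0:
                return k                       # try every heuristic once first
        return max(range(len(counts)),
                   key=lambda k: sums[k] / counts[k]
                                 + c * math.sqrt(math.log(t) / counts[k]))

    def solve(instance, heuristics, n_steps, reward_of):
        """At each decision point, pick a heuristic, apply it, credit it."""
        counts = [0] * len(heuristics)
        sums = [0.0] * len(heuristics)
        for t in range(1, n_steps + 1):
            k = ucb1_select(counts, sums, t)
            outcome = heuristics[k](instance)  # apply the chosen heuristic
            counts[k] += 1
            sums[k] += reward_of(outcome)      # credit the chosen heuristic
        return counts, sums

Any credit measure that rewards a heuristic for speeding up the search can be plugged in as reward_of; the bandit then balances exploiting the currently best heuristic against re-testing the others.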

Adaptive Operator Selection

In line with his PhD work, Alvaro Fialho has successfully applied his Adaptive Operator Selection method to the on-line tuning of Differential Evolution in the multi-objective case [96].
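For illustration, here is a minimal sketch of one classical Adaptive Operator Selection scheme, probability matching with a sliding-window, extreme-value credit; it shows the general AOS loop, not necessarily the exact variant used in [96], and all names are illustrative.

    import random
    from collections import deque

    class ProbabilityMatchingAOS:
        def __init__(self, n_ops, p_min=0.05, window=50):
            self.n_ops = n_ops
            self.p_min = p_min                 # exploration floor per operator
            self.credits = [deque(maxlen=window) for _ in range(n_ops)]

        def select(self):
            # Extreme-value credit: best recent improvement of each operator.
            quality = [max(c) if c else 1.0 for c in self.credits]
            total = sum(quality) or 1.0
            probs = [self.p_min + (1 - self.n_ops * self.p_min) * q / total
                     for q in quality]
            return random.choices(range(self.n_ops), weights=probs)[0]

        def update(self, op, improvement):
            # Reward the operator with its (non-negative) fitness improvement.
            self.credits[op].append(max(0.0, improvement))

In the Differential Evolution setting of [96], each "operator" would be one of the mutation strategies, and the credit a measure of the trial vector's improvement over its parent.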

Learn and Optimize (LaO)

LaO is an instance-based parameter-tuning method. Though originally designed for the Divide-And-Evolve framework (see Section 6.2), it is a generic method that learns the relationship between instance features and the optimal parameters of the optimizer. The current version [49], [50], [51] uses a neural network to learn the optimal parameters directly, and the average performance increase over the default parameter set (which won the temporal track of the IPC7 competition) is more than 10%. Ongoing work uses rankSVM to learn a partial order on the feature × parameter space.
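A minimal sketch of the neural-network variant of this idea follows, assuming that instance feature vectors and per-instance best-known parameter settings (e.g., found offline by a tuner) are already available; the data, shapes, and names below are placeholders, not the LaO implementation itself.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 8))       # instance feature vectors (placeholder data)
    Y = rng.random((200, 3))       # per-instance best parameter vectors

    # Learn the mapping from instance features to optimizer parameters.
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X, Y)

    x_new = rng.random((1, 8))     # features of an unseen instance
    params = model.predict(x_new)  # predicted parameters to run the optimizer with

At solving time, the learned model is queried once on the features of the new instance, and the optimizer is launched with the predicted parameters instead of the default ones.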

Adaptive Constraint Programming

Alejandro Arbelaez defended his PhD, prepared under the supervision of Youssef Hamadi and Michèle Sebag, on May 31 [1]. A survey of his PhD work has been published as a book chapter [74], and his most recent work focused on optimizing collaboration in distributed SAT solving in highly parallel environments [13].

Ranking by calibrated AdaBoost

In [22], [21] we describe a learning-to-rank technique based on calibrated multi-class classification. We train a set of multi-class classifiers using AdaBoost.MH, calibrate them using various techniques to obtain diverse class probability estimates, and finally approximate the Bayes-scoring function (which optimizes the popular Information Retrieval performance measure NDCG) by mixing these estimates into a final scoring function. Our method outperforms many standard ranking algorithms on the LETOR benchmark datasets, although most of them are based on significantly more complex learning-to-rank techniques than ours.
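The core of the approach can be sketched as follows: calibrate a multi-class classifier over relevance grades, then score each document by its expected gain under the calibrated probabilities. AdaBoost.MH itself is not available in scikit-learn, so a generic boosted classifier stands in here, and the data and names are illustrative.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.calibration import CalibratedClassifierCV

    rng = np.random.default_rng(0)
    X = rng.random((500, 10))                  # query-document feature vectors
    y = rng.integers(0, 3, 500)                # relevance grades 0, 1, 2

    # Calibrate the classifier to get class probability estimates.
    clf = CalibratedClassifierCV(GradientBoostingClassifier(), cv=3)
    clf.fit(X, y)

    proba = clf.predict_proba(X[:20])          # P(grade = k | x)
    gains = 2.0 ** clf.classes_ - 1.0          # NDCG-style gain per grade
    scores = proba @ gains                     # expected gain = scoring function
    ranking = np.argsort(-scores)              # rank documents by score

Scoring by expected gain is what ties the mixture to NDCG: with well-calibrated probabilities, ranking documents by expected gain approximates the Bayes-optimal scoring function for that measure.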